This work presents a clusterization algorithm called k-Morphological Sets (k-MS), based on morphological reconstruction and heuristics. In the worst case, k-MS is faster than CPU-parallel k-Means, and it produces enhanced visualizations of datasets as well as very distinct clusterizations. It is also faster than similar clusterization methods that are sensitive to density and shape, such as Mitosis and TRICLUST. In addition, k-MS is deterministic and has an intrinsic sense of the maximal number of clusters that can be created for a given input sample and input parameters, differing from k-Means and other clusterization algorithms. In other words, given a constant k, a structuring element, and a dataset, k-MS produces k or fewer clusters without using random or pseudo-random functions. Finally, the proposed algorithm also provides a straightforward means of removing noise from images or datasets.
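The paper's implementation is not reproduced here, but the core idea — clusters as connected components of a morphologically dilated point set, with at most k of them kept and the remainder treated as noise — can be sketched as follows. The rasterization onto an integer grid, the square structuring element, and the noise convention are simplifying assumptions of this toy, not the authors' design:

```python
# Toy sketch of the k-MS idea: dilate the point set with a structuring
# element, take connected components of the dilated set as cluster
# candidates, keep the k largest, and mark the rest as noise (-1).
from collections import deque

def k_ms_sketch(points, k, radius=1):
    """points: iterable of (x, y) integer tuples; radius: half-size of a
    square structuring element. Returns a dict point -> cluster id."""
    points = list(points)
    # Dilation: Minkowski sum of the point set with the structuring element.
    se = [(dx, dy) for dx in range(-radius, radius + 1)
                   for dy in range(-radius, radius + 1)]
    dilated = {(x + dx, y + dy) for (x, y) in points for (dx, dy) in se}
    # Connected components (4-connectivity) of the dilated set via BFS.
    comp, label = {}, 0
    for cell in dilated:
        if cell in comp:
            continue
        label += 1
        comp[cell] = label
        queue = deque([cell])
        while queue:
            cx, cy = queue.popleft()
            for nb in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                if nb in dilated and nb not in comp:
                    comp[nb] = label
                    queue.append(nb)
    # Keep the k largest components (by number of original points).
    sizes = {}
    for p in points:
        sizes[comp[p]] = sizes.get(comp[p], 0) + 1
    kept = sorted(sizes, key=sizes.get, reverse=True)[:k]
    remap = {c: i for i, c in enumerate(kept)}
    return {p: remap.get(comp[p], -1) for p in points}
```

Note that, as in the abstract's determinism claim, no random function is involved: the same points, k, and structuring element always yield the same partition.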
We propose a method to predict the epicardial and mediastinal fat volumes in computed tomography images using regression algorithms. The obtained results indicate that it is feasible to predict these fat volumes with high correlation, thus alleviating the need for manual or automatic segmentation of both fat volumes: segmenting only one of them suffices, while the volume of the other can be predicted fairly accurately. The best correlation coefficient for predicting the mediastinal fat based on the epicardial fat was 0.9876, obtained with an MLP regressor combined with the Rotation Forest algorithm, with a relative absolute error of 14.4% and a root relative squared error of 15.7%. The best correlation coefficient obtained for predicting the epicardial fat based on the mediastinal fat was 0.9683, with a relative absolute error of 19.6% and a root relative squared error of 24.9%. In addition, we analyzed the feasibility of using linear regressors, which provide an intuitive interpretation of the underlying approximation. In that case, the correlation coefficient for predicting the mediastinal fat based on the epicardial fat was 0.9534, with a relative absolute error of 31.6% and a root relative squared error of 30.1%. For the prediction of the epicardial fat based on the mediastinal fat, the correlation coefficient was 0.8531, with a relative absolute error of 50.43% and a root relative squared error of 52.06%. In summary, it is possible to speed up the overall medical analysis, as well as some of the segmentation and quantification methods employed in the state of the art, by using this prediction approach, consequently reducing costs and enabling preventive treatment that may reduce health problems.
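The three figures quoted throughout the abstract — correlation coefficient, relative absolute error (RAE), and root relative squared error (RRSE) — are standard regression metrics and can be computed as in this minimal sketch (the function name is ours, not the paper's):

```python
import math

def regression_report(y_true, y_pred):
    """Pearson correlation, relative absolute error (RAE) and root
    relative squared error (RRSE) between targets and predictions."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    mean_p = sum(y_pred) / n
    cov = sum((t - mean_t) * (p - mean_p) for t, p in zip(y_true, y_pred))
    var_t = sum((t - mean_t) ** 2 for t in y_true)
    var_p = sum((p - mean_p) ** 2 for p in y_pred)
    r = cov / math.sqrt(var_t * var_p)
    # RAE: absolute error relative to always predicting the mean.
    rae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) \
        / sum(abs(t - mean_t) for t in y_true)
    # RRSE: squared error relative to the mean predictor, then rooted.
    rrse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / var_t)
    return r, rae, rrse
```

Both RAE and RRSE equal 1.0 for a model that always predicts the mean, so the sub-100% values reported above indicate predictions strictly better than that baseline.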
This work proposes a novel classifier called the Morphological Classifier (MC). MCs aggregate concepts from mathematical morphology and supervised learning. The outcome of this aggregation is a classifier that may preserve the shape characteristics of the classes, depending on the choice of stopping criterion and structuring element. MCs are essentially grounded in set theory, and their classification model can be a mathematical set itself. Two types of morphological classifiers are proposed in the current work, namely the Morphological k-NN (MkNN) and the Morphological Dilation Classifier (MDC), which demonstrate the feasibility of the approach. This work provides evidence of the advantages of MCs, e.g., very fast classification times as well as competitive accuracy rates. The performance of MkNN and MDC was assessed using p-dimensional datasets. MCs tied with or outperformed 14 well-established classifiers in 5 out of 8 datasets. On all occasions, the obtained accuracies were higher than the average accuracy of all classifiers. Moreover, the proposed implementations exploit the power of graphics processing units (GPUs) to speed up processing.
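One way to read the dilation-based classification idea — and this is an illustrative guess at the mechanism, not the authors' MDC algorithm — is that each class's training set is dilated iteratively, and a query point is assigned to the first class whose dilated set covers it. A toy grid version of that reading:

```python
def mdc_sketch(train, query, max_iters=50):
    """train: dict class -> set of (x, y) integer grid points.
    Repeatedly dilates each class set with a 3x3 structuring element and
    assigns `query` to the first class whose dilation covers it.
    A hypothetical reading of MDC, not the published algorithm."""
    se = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    grown = {c: set(pts) for c, pts in train.items()}
    for _ in range(max_iters):
        hits = [c for c, pts in grown.items() if query in pts]
        if hits:
            # Several classes reaching the point at the same iteration
            # would need a tie policy; here a unique winner is required.
            return hits[0] if len(hits) == 1 else None
        grown = {c: {(x + dx, y + dy) for (x, y) in pts for (dx, dy) in se}
                 for c, pts in grown.items()}
    return None
```

Because the model is literally a set of grid cells, classification reduces to set membership tests, which is consistent with the abstract's claims of set-theoretic models and very fast classification.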
The rapid development of social robots has stimulated active research in human motion modeling, interpretation, and prediction, in proactive collision avoidance, and in human-robot interaction and co-habitation in shared spaces. Modern approaches require high-quality datasets for training and evaluation. However, most available datasets suffer from either inaccurate tracking data or unnatural, scripted behavior of the tracked people. This paper attempts to fill this gap by providing high-quality tracking information from motion capture, eye-gaze trackers, and on-board robot sensors in a semantically rich environment. To induce natural behavior in the recorded participants, we utilized loosely scripted task assignments, which led participants to navigate the dynamic laboratory environment in a natural and purposeful way. The motion dataset presented in this paper sets a high quality standard, as the realistic and accurate data is enhanced with semantic information, enabling the development of new algorithms that rely not only on the tracking information but also on the context cues of the moving agents and of the static and dynamic environment.
This paper investigates the potential contribution of infrared (IR) imaging to the detection of breast diseases. It compares the consistency of the results of several algorithms for detecting malignant breast conditions, such as Support Vector Machines (SVM), when applied to public data. Moreover, in order to exploit the capabilities of real IR imaging as a complement to clinical trials, and to foster research using high-resolution IR imaging, we argue that the use of a public database reviewed by trained breast physicians is essential. In our work, only the static acquisition protocol was considered. We used 102 IR single-breast images from the Pro Engenharia (PROENG) public database (54 normal and 48 with findings). These images were collected at the Universidade Federal de Pernambuco (UFPE) university hospital. We employed the same features proposed by the authors who obtained the best results, together with the Sequential Minimal Optimization (SMO) classifier, and achieved an accuracy of 61.7% and a Youden index of 0.24.
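The Youden index quoted above summarizes sensitivity and specificity in a single number, J = sensitivity + specificity − 1. A minimal sketch of its computation from confusion-matrix counts:

```python
def youden_index(tp, fn, tn, fp):
    """Youden's J statistic: sensitivity + specificity - 1.
    J ranges from 0 (no diagnostic value) to 1 (perfect test)."""
    sensitivity = tp / (tp + fn)   # true-positive rate on the 48 findings
    specificity = tn / (tn + fp)   # true-negative rate on the 54 normals
    return sensitivity + specificity - 1
```

For illustration, the hypothetical counts tp=30, fn=18, tn=33, fp=21 over the 102 images are consistent with the reported figures: accuracy (30+33)/102 ≈ 61.7% and J ≈ 0.24. The actual confusion matrix is not given in the abstract.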
This work proposes the use of Genetic Algorithms (GA) in the process of tracing and recognizing the human pericardium contour in computed tomography (CT) images. We assume that each slice of the pericardium can be modeled by an ellipse whose parameters must be optimally determined. An optimal ellipse is one that closely follows the pericardium contour and, consequently, properly separates the epicardial and mediastinal fats of the human heart. Tracing and automatically identifying the pericardium contour aids medical diagnosis. Usually, this process is done manually, or not done at all, due to the effort required. Furthermore, detecting the pericardium may improve previously proposed automated methods that separate the two types of fat associated with the human heart. Quantification of these fats provides important health-risk marker information, as they are associated with the development of certain cardiovascular pathologies. Finally, we conclude that the GA provides satisfactory solutions in a feasible amount of processing time.
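The shape-fitting step can be sketched with a bare-bones GA. This toy fits an axis-aligned ellipse (cx, cy, a, b) by minimizing the mean algebraic distance of contour points to the ellipse; the paper's formulation would include at least a rotation angle and its own selection and mutation operators, all of which are simplified away here:

```python
import random

def ga_ellipse_fit(points, generations=200, pop_size=60, seed=0):
    """Evolves ellipse parameters (cx, cy, a, b) that minimize the mean
    algebraic distance |((x-cx)/a)^2 + ((y-cy)/b)^2 - 1| over the points.
    A simplified sketch, not the paper's GA."""
    rng = random.Random(seed)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    span = max(max(xs) - min(xs), max(ys) - min(ys))

    def fitness(g):
        cx, cy, a, b = g
        if a <= 0 or b <= 0:
            return float("inf")   # degenerate ellipses are infeasible
        return sum(abs(((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 - 1)
                   for x, y in points) / len(points)

    # Random initial population within the bounding box of the points.
    pop = [[rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)),
            rng.uniform(1e-3, span), rng.uniform(1e-3, span)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]       # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            mom, dad = rng.sample(survivors, 2)
            child = [rng.choice(pair) for pair in zip(mom, dad)]  # uniform crossover
            i = rng.randrange(4)              # single-gene Gaussian mutation
            child[i] += rng.gauss(0, 0.05 * span)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

Because the elite half of each generation survives unchanged, the best fitness is non-increasing across generations, which is what makes a fixed, feasible processing budget practical.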
We derive a set of causal deep neural networks whose architectures are a consequence of tensor (multilinear) factor analysis. Forward causal questions are addressed with a neural network architecture composed of causal capsules and a tensor transformer. The former estimate a set of latent variables that represent the causal factors, and the latter governs their interaction. Causal capsules and tensor transformers may be implemented using shallow autoencoders, but for a scalable architecture we employ block algebra and derive a deep neural network composed of a hierarchy of autoencoders. An interleaved kernel hierarchy preprocesses the data resulting in a hierarchy of kernel tensor factor models. Inverse causal questions are addressed with a neural network that implements multilinear projection and estimates the causes of effects. As an alternative to aggressive bottleneck dimension reduction or regularized regression that may camouflage an inherently underdetermined inverse problem, we prescribe modeling different aspects of the mechanism of data formation with piecewise tensor models whose multilinear projections are well-defined and produce multiple candidate solutions. Our forward and inverse neural network architectures are suitable for asynchronous parallel computation.
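The tensor (multilinear) factor analysis underlying these architectures can be illustrated with a plain Tucker/HOSVD decomposition — a minimal NumPy sketch of the factor model only, not of the causal capsules, tensor transformer, or autoencoder hierarchy derived from it:

```python
import numpy as np

def mode_unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the rows, the remaining
    modes are flattened into the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Higher-order SVD: T = Z x_1 U[0] x_2 U[1] ... with orthonormal
    mode matrices U[n] and core tensor Z governing their interaction."""
    U = [np.linalg.svd(mode_unfold(T, n), full_matrices=False)[0]
         for n in range(T.ndim)]
    Z = T
    for n, Un in enumerate(U):   # core: contract each mode with U[n]^T
        Z = np.moveaxis(np.tensordot(Un.T, np.moveaxis(Z, n, 0), axes=1), 0, n)
    return Z, U

def reconstruct(Z, U):
    """Multiply the core tensor back by every mode matrix."""
    T = Z
    for n, Un in enumerate(U):
        T = np.moveaxis(np.tensordot(Un, np.moveaxis(T, n, 0), axes=1), 0, n)
    return T
```

In the abstract's terminology, the mode matrices play the role of the latent causal-factor representations and the core tensor governs their interaction; truncating the mode matrices' columns would yield the reduced-dimension factor models that the autoencoder hierarchy scales up.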
Model estimates obtained from traditional subspace identification methods may be subject to significant variance. This elevated variance is aggravated in the cases of large models or of a limited sample size. Common solutions to reduce the effect of variance are regularized estimators, shrinkage estimators and Bayesian estimation. In the current work we investigate the latter two solutions, which have not yet been applied to subspace identification. Our experimental results show that our proposed estimators may reduce the estimation risk up to $40\%$ of that of traditional subspace methods.
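The generic shrinkage idea — a convex combination of a high-variance estimate with a low-variance structured target — can be sketched on a covariance matrix. This is illustrative only: the paper's estimators act on subspace-identification model parameters, not on a raw covariance, and the shrinkage weight below is a fixed assumption rather than an optimized one:

```python
import numpy as np

def shrink_covariance(X, alpha):
    """Shrinks the sample covariance of X (rows = samples) toward a
    scaled-identity target: S_alpha = (1 - alpha) * S + alpha * mu * I,
    where mu is the average sample eigenvalue. With few samples, S is
    singular and high-variance; the target is biased but stable."""
    S = np.cov(X, rowvar=False)           # sample covariance (p x p)
    mu = np.trace(S) / S.shape[0]         # average eigenvalue of S
    return (1 - alpha) * S + alpha * mu * np.eye(S.shape[0])
```

The variance-reduction mechanism is visible in the limited-sample regime the abstract mentions: with fewer samples than dimensions the sample covariance is singular, while every eigenvalue of the shrunk estimate is bounded away from zero by alpha * mu.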
User equipment is one of the main bottlenecks facing the gaming industry nowadays. The extremely realistic games currently available impose high computational requirements on the devices that run them. As a consequence, the game industry has proposed the concept of Cloud Gaming, a paradigm that improves the gaming experience on modest hardware. To this end, games are hosted on remote servers, relegating users' devices to the role of a mere peripheral for interacting with the game. However, this paradigm overloads the communication links connecting users with the cloud, so the service experience becomes highly dependent on network connectivity. To overcome this, Cloud Gaming will be boosted by the promised performance of 5G and future 6G networks, together with the flexibility provided by mobility in multi-RAT scenarios, such as WiFi. In this scope, the present work proposes a framework for measuring and estimating the main end-to-end (E2E) metrics of the Cloud Gaming service, namely key quality indicators (KQIs). In addition, different machine learning techniques are assessed for predicting the KQIs related to the Cloud Gaming user's experience. To this end, the main KQIs of the service, such as input lag, freeze percentage, and perceived video frame rate, are collected in a real environment. Based on these, results show that machine learning techniques provide a good estimation of these indicators solely from network-based metrics. This is considered a valuable asset for guiding the delivery of Cloud Gaming services through cellular communication networks even without access to the user's device, as is expected for telecom operators.
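The network-metrics-to-KQI mapping can be sketched as a plain least-squares regression. The abstract assesses several (unspecified) machine learning techniques; this stand-in uses ordinary linear regression, and the feature names (RTT, jitter, packet loss) are illustrative assumptions rather than the paper's measured metric set:

```python
import numpy as np

def fit_kqi_model(net_metrics, kqi):
    """Least-squares linear model mapping network-level metrics
    (rows = observations) to a KQI such as input lag."""
    X = np.column_stack([net_metrics, np.ones(len(net_metrics))])  # bias term
    w, *_ = np.linalg.lstsq(X, kqi, rcond=None)
    return w

def predict_kqi(w, net_metrics):
    """Estimates the KQI for new network-metric observations."""
    X = np.column_stack([net_metrics, np.ones(len(net_metrics))])
    return X @ w
```

The value of such a model for an operator is exactly the point made above: once fitted, the KQI is estimated from network-side measurements alone, without any access to the user's device.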
Stress has an effect on people's lives that cannot be overstated. While it can be good, since it helps humans adapt to new and different situations, it can also be harmful when not dealt with properly, leading to chronic stress. The objective of this paper is to develop a stress-monitoring solution that can be used in real life while tackling this challenge in a positive way. The SMILE dataset was provided to team Anxolotl, and all that was needed was to develop a robust model. We developed a supervised learning model for classification in Python, presenting a final accuracy of 64.1% and an F1-score of 54.96%. The resulting solution stood up to the robustness test, presenting low variation between runs, which was a major point for its possible integration into the Anxolotl app in the future.
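The two figures reported — accuracy and F1-score — can be computed as in this minimal sketch. The paper does not state its F1 averaging scheme, so macro-averaging over classes is assumed here:

```python
def accuracy_and_macro_f1(y_true, y_pred):
    """Accuracy and macro-averaged F1 over the classes present in y_true."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    f1s = []
    for c in sorted(set(y_true)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # F1 = 2*TP / (2*TP + FP + FN); zero when the class is never hit.
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return acc, sum(f1s) / len(f1s)
```

Macro-averaging weights each class equally, which explains how the F1-score (54.96%) can sit well below the accuracy (64.1%) when the class distribution is imbalanced.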